    Using child-friendly movie stimuli to study the development of face, place, and object regions from age 3 to 12 years

    Scanning young children while they watch short, engaging, commercially produced movies has emerged as a promising approach for increasing data retention and quality. Movie stimuli also evoke a richer variety of cognitive processes than traditional experiments, allowing the study of multiple aspects of brain development simultaneously. However, because these stimuli are uncontrolled, it is unclear how effectively distinct profiles of brain activity can be distinguished from the resulting data. Here we develop an approach for identifying multiple distinct subject-specific regions of interest (ssROIs) using fMRI data collected during movie viewing. We focused on the test case of higher-level visual regions selective for faces, scenes, and objects. Adults (N = 13) were scanned while viewing a 5.6-min child-friendly movie, as well as a traditional localizer experiment with blocks of faces, scenes, and objects. We found that just 2.7 min of movie data could identify subject-specific face, scene, and object regions. While successful, movie-defined ssROIs still showed weaker domain selectivity than traditional ssROIs. Having validated our approach in adults, we then used the same methods on movie data collected from 3- to 12-year-old children (N = 122). Movie response timecourses in 3-year-old children's face, scene, and object regions were already significantly and specifically predicted by timecourses from the corresponding regions in adults. We also found evidence of continued developmental change, particularly in the face-selective posterior superior temporal sulcus. Taken together, our results reveal both early maturity and functional change in face, scene, and object regions, and more broadly highlight the promise of short, child-friendly movies for developmental cognitive neuroscience.
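    The timecourse-prediction logic described above can be illustrated with a short sketch. The example below is hypothetical and is not the authors' pipeline: all arrays are simulated placeholders, and it simply checks the specificity criterion the abstract describes, namely that a child region's movie timecourse should correlate most strongly with the matched adult region's timecourse.

```python
# Minimal sketch (simulated data, not the authors' pipeline): test whether a
# child's regional movie timecourse is specifically predicted by the matched
# adult region's timecourse.
import numpy as np

rng = np.random.default_rng(0)
n_trs = 160  # hypothetical number of movie fMRI timepoints

# Hypothetical adult mean timecourses for each category-selective region
adult = {r: rng.standard_normal(n_trs) for r in ("face", "scene", "object")}

# Hypothetical child timecourse from a putative face region:
# partly shared signal plus noise
child_face = 0.5 * adult["face"] + rng.standard_normal(n_trs)

def pearson(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(a @ b / len(a))

# Specificity: the matched (face) timecourse should predict best
for region, tc in adult.items():
    print(region, round(pearson(child_face, tc), 3))
```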

    Seeing a straight line on a curved surface: decoupling of patterns from surfaces by single IT neurons

    We have no difficulty seeing a straight line drawn on a paper even when the paper is bent, but this inference is in fact nontrivial. Doing so requires either matching local features or representing the pattern after factoring out the surface shape. Here we show that single neurons in the monkey inferior temporal (IT) cortex show invariant responses to patterns across rigid and nonrigid changes of surfaces. We recorded neuronal responses to stimuli in which the pattern and the surrounding surface were varied independently. In a subset of neurons, we found pattern-surface interactions that produced similar responses to stimuli across congruent pattern and surface transformations. These interactions produced systematic shifts in curvature tuning of patterns when overlaid on convex and flat surfaces. Our results show that surfaces are factored out of patterns by single neurons, thereby enabling complex perceptual inferences. NEW & NOTEWORTHY: We have no difficulty seeing a straight line on a curved piece of paper, but in fact, doing so requires decoupling the shape of the surface from the pattern itself. Here we report a novel form of invariance in the visual cortex: single neurons in monkey inferior temporal cortex respond similarly to congruent transformations of patterns and surfaces, in effect decoupling patterns from the surface on which they are overlaid.
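    One way to quantify the decoupling this abstract describes is to compare a neuron's pattern tuning across congruent versus incongruent surface transformations. The sketch below uses simulated firing rates, not the recorded data, and the invariance measure is only one plausible choice.

```python
# Minimal sketch (simulated rates, not the recorded dataset): pattern tuning
# should be preserved across congruent pattern+surface transformations but
# not across an incongruent (shuffled) control.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_patterns = 50, 8

# Hypothetical firing rates to the same patterns on a flat surface and,
# congruently transformed, on a curved surface
flat = rng.gamma(2.0, 5.0, (n_neurons, n_patterns))
curved = flat + rng.normal(0.0, 2.0, flat.shape)   # invariant coding
shuffled = curved[:, rng.permutation(n_patterns)]  # incongruent control

def mean_tuning_corr(a, b):
    # average per-neuron correlation of pattern tuning across surfaces
    return float(np.mean([np.corrcoef(a[i], b[i])[0, 1]
                          for i in range(len(a))]))

print("congruent:  ", round(mean_tuning_corr(flat, curved), 3))
print("incongruent:", round(mean_tuning_corr(flat, shuffled), 3))
```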

    Dynamics of 3D view invariance in monkey inferotemporal cortex

    Rotations in depth are challenging for object vision because features can appear, disappear, be stretched or compressed. Yet we easily recognize objects across views. Are the underlying representations view invariant or dependent? This question has been intensely debated in human vision, but the neuronal representations remain poorly understood. Here, we show that for naturalistic objects, neurons in the monkey inferotemporal (IT) cortex undergo a dynamic transition in time, whereby they are initially sensitive to viewpoint and later encode view-invariant object identity. This transition depended on two aspects of object structure: it was strongest when objects foreshortened strongly across views and were similar to each other. View invariance in IT neurons was present even when objects were reduced to silhouettes, suggesting that it can arise through similarity between external contours of objects across views. Our results elucidate the viewpoint debate by showing that view invariance arises dynamically in IT neurons out of a representation that is initially view dependent.
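    The dynamic transition described here is typically measured with time-resolved, cross-view decoding. The following minimal sketch uses simulated population responses (not the recorded data): a nearest-centroid identity decoder is trained on one view and tested on another at each time bin, and accuracy is near chance early, when the simulated view signal dominates, and rises later as the simulated identity signal grows.

```python
# Minimal sketch (simulated responses): time-resolved cross-view decoding of
# object identity, illustrating an early view-dependent phase followed by
# later view-invariant coding.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_ids, n_views, n_bins = 80, 6, 2, 20

# Hypothetical responses [id, view, time, neuron]: a view signal that decays
# over time and an identity signal that grows
t = np.linspace(0, 1, n_bins)
id_code = rng.standard_normal((n_ids, n_neurons))
view_code = rng.standard_normal((n_views, n_neurons))
resp = (id_code[:, None, None, :] * t[None, None, :, None]
        + view_code[None, :, None, :] * (1 - t)[None, None, :, None]
        + rng.normal(0, 0.5, (n_ids, n_views, n_bins, n_neurons)))

# Nearest-centroid identity decoding: train on view 0, test on view 1
for b in range(0, n_bins, 5):
    train, test = resp[:, 0, b, :], resp[:, 1, b, :]
    dists = ((test[:, None, :] - train[None, :, :]) ** 2).sum(-1)
    acc = np.mean(np.argmin(dists, axis=1) == np.arange(n_ids))
    print(f"bin {b}: cross-view identity accuracy = {acc:.2f}")
```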

    Computational models of category-selective brain regions enable high-throughput tests of selectivity

    Cortical regions apparently selective to faces, places, and bodies have provided important evidence for domain-specific theories of human cognition, development, and evolution. But claims of category selectivity are not quantitatively precise and remain vulnerable to empirical refutation. Here we develop artificial neural network-based encoding models that accurately predict the response to novel images in the fusiform face area, parahippocampal place area, and extrastriate body area, outperforming descriptive models and experts. We use these models to subject claims of category selectivity to strong tests, by screening for and synthesizing images predicted to produce high responses. We find that these high-response-predicted images are all unambiguous members of the hypothesized preferred category for each region. These results provide accurate, image-computable encoding models of each category-selective region, strengthen evidence for domain specificity in the brain, and point the way for future research characterizing the functional organization of the brain with unprecedented computational precision.
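    The encoding-model approach can be sketched compactly: regress a region's responses onto deep-network image features, then screen held-out images for the highest predicted responses. The example below is hypothetical (random features stand in for real CNN activations; this is not the paper's models or stimuli), and plain ridge regression is used as the simplest representative fitting method.

```python
# Minimal sketch (hypothetical data): an image-computable encoding model as
# ridge regression from image features to a region's response, then screening
# held-out images for the highest predicted responses.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(3)
n_imgs, n_feats = 500, 256

X = rng.standard_normal((n_imgs, n_feats))   # hypothetical CNN features per image
w_true = rng.standard_normal(n_feats)
y = X @ w_true + rng.normal(0, 5.0, n_imgs)  # hypothetical regional responses

# Ridge fit on the first 400 images
lam = 10.0
Xtr, ytr = X[:400], y[:400]
w = solve(Xtr.T @ Xtr + lam * np.eye(n_feats), Xtr.T @ ytr)

# Screen held-out images for the highest predicted responses
pred = X[400:] @ w
top = np.argsort(pred)[::-1][:5] + 400
print("predicted top-response images:", top)
print("held-out prediction r:", round(np.corrcoef(pred, y[400:])[0, 1], 3))
```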

    Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus

    The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.